

3 Ways To Effectively Demystify The AI Black Box - AI Summary

#artificialintelligence

Artificial intelligence has demonstrated immense promise when applying machine learning to support the processing of large datasets, particularly in the banking and financial services industry. Sixty percent of financial services companies have implemented at least one form of AI, ranging from virtual assistants communicating with customers and the automation of workflows to managing fraud and network security. Yet the opacity of these systems fuels a "black box" concern, which largely stems from a lack of understanding of how the system works and continual worries about unfair discrimination, ethics, and dangers to privacy and autonomy. Hidden biases can create increased consumer friction, poor customer service, fewer sales and lower revenue, unfair or illegal behavior, and potential discrimination. Achieving trustworthy AI requires close examination and the ability to identify which factors contribute to each bias, so that a more informed decision can be made about what actions to take after identification.


3 ways to effectively demystify the AI black box

#artificialintelligence

Artificial intelligence has demonstrated immense promise when applying machine learning to support the processing of large datasets, particularly in the banking and financial services industry. Sixty percent of financial services companies have implemented at least one form of AI, ranging from virtual assistants communicating with customers and the automation of workflows to managing fraud and network security. Despite these advances in efficiency and automation, the complexity of AI models' inner workings often creates a "black box" issue. This largely stems from a lack of understanding of how the system works and continual concerns around opacity, unfair discrimination, ethics, and dangers to privacy and autonomy. In fact, the lack of transparency in system operation is frequently linked to hidden biases.


Thinking outside of the AI black box

#artificialintelligence

As humans began developing the ability of self-introspection around 40 thousand years ago, they used art to communicate, evoke emotions, and recall past events. These cognitive abilities have helped humans survive and evolve as a species, putting to use tools of memory, language, understanding, reasoning, learning, pattern recognition, and expression. These same abilities humans are now trying to emulate with machines; they are in fact the core components of Artificial Intelligence (AI), one of the most important technical developments of our era. This technology is transforming knowledge, work, governance, and the core of our daily lives, and as the sophistication of these systems increases, especially with the advent of Deep Neural Networks (DNNs), I would argue that human understanding of these systems is decreasing. A need is arising to bring to this field the human-centered design approach of HCI (Human-Computer Interaction), and in this paper I will suggest how art and creative thought, together with HCI expertise, can help broaden the current spectrum of AI and its accessibility, and possibly form a joint venture to imagine what AI could become.


The explainability problem - can new approaches pry open the AI black box?

#artificialintelligence

The so-called "black box" aspect of AI, usually referred to as the explainability problem, or X(AI) for short, arose slowly over the past few years. Still, with the rapid development of AI, it is now considered a significant problem. How can you trust a model if you cannot understand how it reaches its conclusions? Whether for commercial benefit, ethical concerns, or regulatory considerations, X(AI) is essential if users are to understand, appropriately trust, and effectively manage AI results. In researching this topic, I was surprised to find almost 400 papers on the subject.
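As an illustration of one simple, model-agnostic explainability technique (not a method drawn from the article itself), the sketch below probes a black-box predictor with permutation importance: shuffle one feature's column at a time and measure how much accuracy drops. The toy "model" and all names here are hypothetical stand-ins, assuming only NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature 0 drives the label, feature 1 is mostly noise.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] > 0).astype(int)

# A stand-in "black box": we can query predictions but pretend we
# cannot inspect its internals.
def black_box_predict(X):
    return (X[:, 0] + 0.05 * X[:, 1] > 0).astype(int)

def permutation_importance(predict, X, y, feature, n_repeats=10):
    """Mean accuracy drop when one feature's column is shuffled."""
    base = (predict(X) == y).mean()
    drops = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, feature] = rng.permutation(Xp[:, feature])
        drops.append(base - (predict(Xp) == y).mean())
    return float(np.mean(drops))

imp0 = permutation_importance(black_box_predict, X, y, 0)
imp1 = permutation_importance(black_box_predict, X, y, 1)
print(imp0, imp1)  # feature 0 should show a much larger drop
```

A large drop means the model leans heavily on that feature, which is exactly the kind of evidence an explainability audit can surface without opening the box.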


All Models

#artificialintelligence

All models are wrong, but some are useful. Mainstream AI discourse stresses the need for unbiased data and algorithms to ensure fair representation, but it overlooks the intrinsic limits of any statistical technique. Machine learning is a statistical model of the world, and we should question the way it operates, also statistically, in world-making. The statistical models of machine learning have silently become a new ubiquitous Kulturtechnik through which the perception of the world is increasingly mediated and jobs are automated. From face recognition and self-driving cars to automated decision making, AI constructs, fosters, and controls statistical models of society.


Racist self-driving car scare debunked, inside AI black boxes, Google helps folks go with the TensorFlow...

#artificialintelligence

Roundup Hello, here's a quick recap on all the latest AI-related news beyond what we've already reported this week. You may have seen news reports that autonomous cars are unlikely to detect pedestrians crossing the road if they have dark skin, and thus run them over. And yes, the internal alarm bells in your head should be going off, as a closer look at the research behind the stories shows all those headlines screaming about racist AI are a little off the mark. The academic paper at the heart of the matter described a series of experiments testing different computer vision models, such as the Faster R-CNN model and R-50-FPN, on images of pedestrians with different skin tones. The study's authors, based at the Georgia Institute of Technology in the US, described how they paid humans to look through the collection of roughly 3,500 photos, and individually tag people in the snaps as either "LS" for light skin or "DS" for dark skin, and then trained the neural networks using this dataset.